92 research outputs found

    3D Cameras: 3D Computer Vision of Wide Scope

    The human visual sense is the one among all our senses that gathers the most information we receive. Evolution has optimized our visual system to navigate in three dimensions, even through cluttered environments. For perceiving 3D information, the human brain uses three important principles: stereo vision, motion parallax and a-priori knowledge about the perspective appearance of objects in dependence on their distance. These tasks have posed a challenge to computer vision for decades. Today the most common techniques for 3D sensing are based on CCD or CMOS cameras, laser scanners or 3D time-of-flight cameras. Even though evolution has shown a predominance of passive stereo vision systems, three problems remain for passive 3D perception compared with the two active vision systems mentioned above. First, the computation is demanding, since correspondences between two images taken from different points of view have to be found. Second, distances to structureless surfaces cannot be measured if the perspective projection of the object is larger than the camera's field of view; this is often called the aperture problem. Finally, a passive visual sensor has to cope with shadowing effects and changes in illumination over time. That is why mostly active vision systems like laser scanners are used for mapping purposes, e.g. [Thrun et al., 2000], [Wulf & Wagner, 2003], [Surmann et al., 2003]. But these approaches are usually not applicable to tasks that must account for environment dynamics. Due to this restriction, 3D cameras [CSEM SA, 2007], [PMDTec, 2007] have attracted attention since their invention nearly a decade ago. Their distance measurements are also based on a time-of-flight principle, but with an important difference: instead of sampling laser beams serially to acquire distance data point-wise, the entire scene is illuminated with modulated light and measured in parallel. This principle allows for higher frame rates and thus enables the consideration of environment dynamics. The first part of this chapter discusses the physical principles of 3D sensors commonly used in the robotics community for typical problems like mapping and navigation. The second part concentrates on 3D cameras, their assets, drawbacks and perspectives. Based on these examinations, some solutions are discussed that handle common problems occurring in dynamic environments with changing lighting conditions. Finally, the last part of this chapter shows how 3D cameras can be applied to mapping, object localization and feature tracking tasks.
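    The distance computation behind such lock-in time-of-flight pixels can be illustrated compactly. The following is a minimal sketch, assuming the common four-bucket sampling scheme (samples of the correlation signal at 0°, 90°, 180° and 270° phase shift); sign conventions vary between sensors, so treat it as representative rather than the exact PMD/CSEM implementation:

        import numpy as np

        C = 299_792_458.0  # speed of light [m/s]

        def tof_distance(a0, a1, a2, a3, f_mod=20e6):
            # Phase of the modulated signal recovered from the four
            # samples, wrapped to [0, 2*pi).
            phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
            # d = c * phase / (4 * pi * f_mod)
            return C * phase / (4.0 * np.pi * f_mod)

    The unambiguous range of this measurement is c / (2 * f_mod), i.e. 7.5 m at a typical 20 MHz modulation frequency.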

    3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation

    Global registration of heterogeneous ground and aerial mapping data is a challenging task. This is especially difficult in disaster response scenarios, where we have no prior information on the environment and cannot assume the regular order of man-made environments or meaningful semantic cues. In this work we extensively evaluate different approaches to globally register UGV-generated 3D point-cloud data from LiDAR sensors with UAV-generated point-cloud maps from vision sensors. The approaches are realizations of different selections for: a) local features: key-points or segments; b) descriptors: FPFH, SHOT, or ESF; and c) transformation estimation: RANSAC or FGR. Additionally, we compare the results against standard approaches like applying ICP after a good prior transformation has been given. The evaluation criteria include the distance a UGV needs to travel to successfully localize, the registration error, and the computational cost. In this context, we report our findings on effectively performing the task on two new Search and Rescue datasets. Our results have the potential to help the community make informed decisions when registering point-cloud maps from ground robots to those from aerial robots.
    Comment: Awarded Best Paper at the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017)
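    As an illustration of one evaluated combination (FPFH descriptors with RANSAC-based transformation estimation, refined by ICP), here is a sketch using the open-source Open3D library. It is not the authors' implementation, and the voxel size and thresholds are placeholder values:

        import open3d as o3d

        def register(source, target, voxel=0.5):
            # Downsample both clouds and estimate normals.
            src = source.voxel_down_sample(voxel)
            tgt = target.voxel_down_sample(voxel)
            for pc in (src, tgt):
                pc.estimate_normals(
                    o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
            # FPFH descriptors on the downsampled clouds.
            f_src, f_tgt = (o3d.pipelines.registration.compute_fpfh_feature(
                                pc, o3d.geometry.KDTreeSearchParamHybrid(
                                    radius=5 * voxel, max_nn=100))
                            for pc in (src, tgt))
            # Global registration: RANSAC over feature correspondences.
            coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
                src, tgt, f_src, f_tgt, True, 1.5 * voxel,
                o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
                [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
                o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
            # Local refinement with point-to-plane ICP.
            fine = o3d.pipelines.registration.registration_icp(
                src, tgt, 0.4 * voxel, coarse.transformation,
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            return fine.transformation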

    Deployment of Aerial Robots during the Flood Disaster in Erftstadt / Blessem in July 2021

    Climate change is leading to more and more extreme weather events such as heavy rainfall and flooding. This technical report deals with the question of how rescue commanders can be provided better and faster with current information during flood disasters using Unmanned Aerial Vehicles (UAVs), exemplified by the flood in July 2021 in Central Europe, more specifically in Erftstadt / Blessem. The UAVs were used for live observation and regular inspections of the flood edge on the one hand, and for systematic data acquisition on the other, in order to compute 3D models using Structure from Motion and Multi-View Stereo. The 3D models, embedded in a GIS application, serve as a planning basis for systematic exploration and as decision support for the deployment of additional smaller UAVs as well as rescue forces. The systematic data acquisition by means of autonomous meander flights provides high-resolution images, which are processed into a georeferenced 3D model of the surrounding area within 15 minutes in a specially equipped robotic command vehicle (RobLW). From the comparison of high-resolution elevation profiles extracted from the 3D model on successive days, changes in the water level become visible (see the sketch after this abstract). This information enables the emergency management to plan further inspections of the buildings and to search for missing persons on site.
    Comment: 6 pages
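    The water-level comparison mentioned above boils down to differencing co-registered elevation profiles. A minimal sketch, assuming the photogrammetry pipeline has already produced two digital elevation models on the same grid as NumPy arrays (the function and variable names are hypothetical, not from the report):

        import numpy as np

        def elevation_change(dem_day1, dem_day2, row, cols):
            # Difference of elevation profiles along one raster row of two
            # co-registered DEMs (heights in metres) from successive days;
            # positive values mean the surface, e.g. the water level, rose.
            return dem_day2[row, cols] - dem_day1[row, cols]

        # Toy example: a uniform 0.4 m rise along the profile.
        dem_a = np.zeros((100, 100))
        dem_b = dem_a + 0.4
        print(elevation_change(dem_a, dem_b, row=50, cols=slice(20, 80)).mean())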

    Obtaining Robust Control and Navigation Policies for Multi-Robot Navigation via Deep Reinforcement Learning

    Multi-robot navigation is a challenging task in which multiple robots must be coordinated simultaneously within dynamic environments. We apply deep reinforcement learning (DRL) to learn a decentralized end-to-end policy that maps raw sensor data to the command velocities of the agent. To enable the policy to generalize, training is performed in different environments and scenarios. The learned policy is tested and evaluated in common multi-robot scenarios such as swapping places, an intersection and a bottleneck situation. This policy allows the agent to recover from dead ends and to navigate through complex environments.
    Comment: 13 pages
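    The abstract does not spell out the network architecture, but a decentralized sensor-to-velocity policy of this kind can be sketched in a few lines of PyTorch; the layer sizes and the (scan, goal) input split are assumptions for illustration:

        import torch
        import torch.nn as nn

        class NavPolicy(nn.Module):
            # Minimal end-to-end policy: raw range scan plus relative goal
            # in, (linear, angular) command velocity out, squashed to [-1, 1].
            def __init__(self, n_beams=360, goal_dim=2, hidden=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_beams + goal_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 2), nn.Tanh())

            def forward(self, scan, goal):
                return self.net(torch.cat([scan, goal], dim=-1))

        policy = NavPolicy()
        v = policy(torch.rand(1, 360), torch.rand(1, 2))  # shape (1, 2)

    In a DRL setting such a network would serve as the actor, trained across the randomized environments and scenarios the abstract describes.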

    Teleoperated Visual Inspection and Surveillance with Unmanned Ground and Aerial Vehicles

    This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms, an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones, with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities (a toy sketch follows below), (B) the camera and vision system, with a focus on real-time feature extraction, e.g. for tracking the UAV, and (C) the architecture and hardware of the UAV.
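    The automatic-configuration idea in (A) can be illustrated with a toy example: the phone parses an XML capability description and instantiates a matching widget per control. The schema below is invented for illustration; the actual RKI-XML format is defined in the paper:

        import xml.etree.ElementTree as ET

        # Hypothetical capability description (not the real RKI-XML schema).
        RKI_XML = '''
        <vehicle name="UGV">
          <control id="drive" type="axis2d" min="-1.0" max="1.0"/>
          <control id="camera_pan" type="axis1d" min="-90" max="90"/>
        </vehicle>
        '''

        def build_ui(xml_text):
            # Map each declared control capability to a UI widget type,
            # mimicking the automatic smart-phone configuration.
            root = ET.fromstring(xml_text)
            return [{"id": c.get("id"),
                     "widget": "joystick" if c.get("type") == "axis2d" else "slider",
                     "range": (float(c.get("min")), float(c.get("max")))}
                    for c in root.iter("control")]

        print(build_ui(RKI_XML))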

    Fast Color-Independent Ball Detection for Mobile Robots

    This paper presents a novel scheme for fast color-invariant ball detection in the RoboCup context. Edge-filtered camera images serve as input for an AdaBoost learning procedure that constructs a cascade of classification and regression trees (CARTs). Our system is capable of detecting different soccer balls in RoboCup and other environments. The resulting approach for object classification is real-time capable and reliable.
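    The detection scheme (edge filtering followed by boosted CARTs) translates into a short sketch. This is not the authors' RoboCup code and trains a single boosted stage rather than the full cascade: it uses OpenCV's Sobel filter for the edge images and scikit-learn's AdaBoost over shallow decision trees (scikit-learn >= 1.2 for the estimator keyword), with random stand-in data:

        import cv2
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.tree import DecisionTreeClassifier

        def edge_features(gray, size=24):
            # Edge-filter a grayscale patch (Sobel gradient magnitude)
            # and flatten it into a feature vector.
            patch = cv2.resize(gray, (size, size)).astype(np.float32)
            gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
            gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
            return np.sqrt(gx ** 2 + gy ** 2).ravel()

        vec = edge_features(np.random.randint(0, 255, (60, 80), dtype=np.uint8))

        # AdaBoost over shallow CARTs; in practice X holds edge features of
        # ball / non-ball patches and y the 0/1 labels (toy data here).
        X = np.random.rand(200, 24 * 24)
        y = np.random.randint(0, 2, 200)
        clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                                 n_estimators=50).fit(X, y)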